
Chapter 4 Determinants (Concepts)

Building upon the algebra and operations of matrices learned in the previous chapter, this crucial chapter introduces the concept of the Determinant, a unique scalar value (a real or complex number) that can be calculated from the elements of a square matrix only. Determinants encapsulate important information about the matrix, particularly concerning invertibility, and serve as essential tools in solving systems of linear equations, finding areas, and various other applications in linear algebra, calculus, and physics.

The definition and calculation of a determinant depend on the order of the square matrix: for a $1 \times 1$ matrix $[a_{11}]$, the determinant is simply the element $a_{11}$; for a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, it is $ad - bc$; and for matrices of order three and above, it is defined through cofactor expansion along any row or column. Each of these cases is developed in detail later in this chapter.

Directly calculating determinants for larger matrices using the cofactor expansion can be computationally intensive. Fortunately, determinants possess several fundamental Properties that greatly simplify their evaluation:

  1. The value of the determinant remains unchanged if its rows are interchanged with its columns ($|A| = |A^T|$).
  2. If any two rows (or columns) of a determinant are interchanged, the sign of the determinant changes.
  3. If any two rows (or columns) of a determinant are identical (or proportional), then the value of the determinant is zero.
  4. If each element of a single row (or column) is multiplied by a constant $k$, then the value of the determinant gets multiplied by $k$.
  5. If some or all elements of a row (or column) are expressed as the sum of two (or more) terms, then the determinant can be expressed as the sum of two (or more) determinants.
  6. Perhaps the most useful property for simplification: If, to each element of any row (or column), we add $k$ times the corresponding elements of another row (or column), the value of the determinant remains unchanged. This property is extensively used to introduce zeros into a row or column, making the cofactor expansion much easier.

Using cofactors, we define the Adjoint of a square matrix $A$, denoted by $\mathbf{adj(A)}$. The adjoint is the transpose of the matrix formed by the cofactors of the elements of $A$. That is, if $C = [C_{ij}]$ is the matrix of cofactors, then $adj(A) = C^T$. The adjoint matrix has a remarkable relationship with the original matrix and its determinant: $$ \mathbf{A(\text{adj } A) = (\text{adj } A)A = |A|I} $$ where $I$ is the identity matrix of the same order.

This fundamental result directly leads to the formula for finding the Inverse of a Square Matrix. If $A$ is a square matrix, its inverse $A^{-1}$ exists if and only if its determinant is non-zero ($|A| \neq 0$). Such a matrix is called non-singular. If $|A| = 0$, the matrix is called singular and its inverse does not exist. For a non-singular matrix $A$, the inverse is given by: $$ \mathbf{A^{-1} = \frac{1}{|A|} (\text{adj } A)} $$

Finally, determinants find significant applications: computing the area of a triangle from the coordinates of its vertices, testing three points for collinearity, and solving systems of linear equations via the matrix inverse method and Cramer's Rule. Each of these applications is developed in the sections that follow.

Determinants thus provide powerful computational and theoretical tools within linear algebra.



Introduction to Determinants

Every square matrix can be associated with a unique number called its determinant. Determinants are useful in solving systems of linear equations, finding the inverse of a matrix, and in calculus.

For a square matrix $A$, its determinant is denoted by $|A|$ or $\det(A)$. Note that $|A|$ does not mean the absolute value of $A$ in this context; it is simply the standard notation for the determinant.


Determinant of a Matrix of Order One

If $A = [a_{11}]$ is a matrix of order $1 \times 1$, its determinant is defined as the value of the single element within the matrix.

$|A| = \det(A) = a_{11}$

... (1)

Example 1. Find the determinant of the matrix $A = [-5]$.

Answer:

Given matrix $A = [-5]$.

The determinant of a $1 \times 1$ matrix is the element itself.

$|A| = \det(A) = -5$.


Determinant of a Matrix of Order Two

If $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ is a square matrix of order $2 \times 2$, its determinant is defined as the difference between the product of the elements on the main diagonal (top-left to bottom-right) and the product of the elements on the anti-diagonal (top-right to bottom-left).

$|A| = \det(A) = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc$

... (2)

Example 2. Evaluate the determinant $\begin{vmatrix} 2 & 3 \\ 4 & 5 \end{vmatrix}$.

Answer:

Given determinant is $\begin{vmatrix} 2 & 3 \\ 4 & 5 \end{vmatrix}$.

Using the formula for a $2 \times 2$ determinant:

$\begin{vmatrix} 2 & 3 \\ 4 & 5 \end{vmatrix} = (2)(5) - (3)(4)$

$= 10 - 12$

$= -2$


Determinant of a Matrix of Order Three

The determinant of a $3 \times 3$ matrix can be evaluated by expanding it along any row or any column. This expansion involves terms formed by the elements of the chosen row/column multiplied by their corresponding cofactors. To understand cofactors, we first need to define minors.

Minor ($M_{ij}$)

The minor of an element $a_{ij}$ (located in the $i$-th row and $j$-th column) in a determinant is the determinant of the square submatrix obtained by deleting the $i$-th row and the $j$-th column containing the element $a_{ij}$.

For a general $3 \times 3$ matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, let's find some minors:

The minor of $a_{11}$, denoted $M_{11}$, is the determinant of the matrix left after removing the 1st row and 1st column:

$M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32}$

The minor of $a_{23}$, denoted $M_{23}$, is the determinant of the matrix left after removing the 2nd row and 3rd column:

$M_{23} = \begin{vmatrix} a_{11} & a_{12} \\ a_{31} & a_{32} \end{vmatrix} = a_{11}a_{32} - a_{12}a_{31}$

Cofactor ($A_{ij}$ or $C_{ij}$)

The cofactor of an element $a_{ij}$ is related to its minor $M_{ij}$ by a sign factor. It is defined as $A_{ij} = (-1)^{i+j} M_{ij}$, where $i$ is the row number and $j$ is the column number of the element $a_{ij}$.

The sign factor $(-1)^{i+j}$ results in the following pattern of signs for the cofactors in a $3 \times 3$ matrix:

$\begin{pmatrix} (-1)^{1+1} & (-1)^{1+2} & (-1)^{1+3} \\ (-1)^{2+1} & (-1)^{2+2} & (-1)^{2+3} \\ (-1)^{3+1} & (-1)^{3+2} & (-1)^{3+3} \end{pmatrix} = \begin{pmatrix} + & - & + \\ - & + & - \\ + & - & + \end{pmatrix}$

Example: The cofactor of $a_{11}$ is $A_{11} = (-1)^{1+1} M_{11} = (+1) M_{11} = M_{11}$.

The cofactor of $a_{12}$ is $A_{12} = (-1)^{1+2} M_{12} = (-1) M_{12} = -M_{12}$.

The cofactor of $a_{23}$ is $A_{23} = (-1)^{2+3} M_{23} = (-1) M_{23} = -M_{23}$.

Expansion of a Determinant

The determinant of a matrix $A = [a_{ij}]$ of order $n$ is the sum of the products of the elements of any row (or column) with their corresponding cofactors. This is also known as the Laplace expansion.

For a $3 \times 3$ matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, we can calculate the determinant by expanding along any row or column. The value obtained will be the same regardless of the choice of row or column.

Let's expand along the first row:

$\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$

... (3)

Substituting the cofactor definitions ($A_{ij} = (-1)^{i+j} M_{ij}$):

$\det(A) = a_{11}(-1)^{1+1}M_{11} + a_{12}(-1)^{1+2}M_{12} + a_{13}(-1)^{1+3}M_{13}$

$\det(A) = a_{11}M_{11} - a_{12}M_{12} + a_{13}M_{13}$

... (4)

Expanding $M_{11}, M_{12}, M_{13}$ using the $2 \times 2$ determinant formula (Equation 2):

$M_{11} = \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{vmatrix} = a_{22}a_{33} - a_{23}a_{32}$

$M_{12} = \begin{vmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{vmatrix} = a_{21}a_{33} - a_{23}a_{31}$

$M_{13} = \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{vmatrix} = a_{21}a_{32} - a_{22}a_{31}$

Substituting these into Equation 4:

$\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$

... (5)

This formula can be used to compute the determinant directly from the elements. Choosing a row or column with more zeros simplifies the calculation as the terms involving the zero elements become zero.
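To make the expansion concrete, here is a minimal Python sketch of cofactor expansion along the first row (Equations 3–5), using plain lists and no external libraries. It is written for clarity rather than efficiency; for large matrices, optimized routines such as `numpy.linalg.det` are preferable. The function name `det` is our own choice for illustration.

```python
# Minimal sketch: Laplace (cofactor) expansion along the first row.
def det(A):
    """Determinant of a square matrix given as a list of lists."""
    n = len(A)
    if n == 1:
        return A[0][0]            # order one: the single element itself
    total = 0
    for j in range(n):
        # Minor M_{1,j+1}: delete the first row and the (j+1)-th column.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        cofactor = (-1) ** j * det(minor)   # sign (-1)^{1+(j+1)} = (-1)^j
        total += A[0][j] * cofactor
    return total

print(det([[1, 2, 4], [-1, 3, 0], [4, 1, 0]]))  # -52, as Example 3 below confirms by hand
```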

Example 3. Evaluate the determinant of the matrix $A = \begin{pmatrix} 1 & 2 & 4 \\ -1 & 3 & 0 \\ 4 & 1 & 0 \end{pmatrix}$.

Answer:

Given matrix $A = \begin{pmatrix} 1 & 2 & 4 \\ -1 & 3 & 0 \\ 4 & 1 & 0 \end{pmatrix}$. We need to find $\det(A)$.

We can expand along any row or column. Observing that the 3rd column contains two zeros, expanding along the 3rd column will be the easiest method. The elements of the 3rd column are $a_{13}=4$, $a_{23}=0$, $a_{33}=0$.

Using the expansion formula along the 3rd column:

$\det(A) = a_{13}A_{13} + a_{23}A_{23} + a_{33}A_{33}$

$\det(A) = 4 \cdot A_{13} + 0 \cdot A_{23} + 0 \cdot A_{33}$

$\det(A) = 4 \cdot A_{13}$

Now, we calculate the cofactor $A_{13}$.

$A_{13} = (-1)^{1+3} M_{13} = (-1)^4 M_{13} = (+1) M_{13}$

The minor $M_{13}$ is the determinant of the submatrix obtained by deleting the 1st row and 3rd column of $A$:

$M_{13} = \begin{vmatrix} -1 & 3 \\ 4 & 1 \end{vmatrix}$

Using the $2 \times 2$ determinant formula:

$M_{13} = (-1)(1) - (3)(4) = -1 - 12 = -13$

So, the cofactor $A_{13} = (+1) \cdot (-13) = -13$.

Finally, substitute the value of $A_{13}$ back into the determinant expansion:

$\det(A) = 4 \cdot A_{13} = 4 \cdot (-13) = -52$

Alternate Answer:

Let's evaluate the determinant by expanding along the first row. The elements of the first row are $a_{11}=1$, $a_{12}=2$, $a_{13}=4$.

Using the expansion formula along the 1st row:

$\det(A) = a_{11}A_{11} + a_{12}A_{12} + a_{13}A_{13}$

We need to calculate the cofactors $A_{11}, A_{12}, A_{13}$.

$A_{11} = (-1)^{1+1} M_{11} = M_{11} = \begin{vmatrix} 3 & 0 \\ 1 & 0 \end{vmatrix} = (3)(0) - (0)(1) = 0 - 0 = 0$

$A_{12} = (-1)^{1+2} M_{12} = -M_{12} = -\begin{vmatrix} -1 & 0 \\ 4 & 0 \end{vmatrix} = -((-1)(0) - (0)(4)) = -(0 - 0) = 0$

$A_{13} = (-1)^{1+3} M_{13} = M_{13} = \begin{vmatrix} -1 & 3 \\ 4 & 1 \end{vmatrix} = (-1)(1) - (3)(4) = -1 - 12 = -13$

Now, substitute these cofactors and the first row elements into the expansion formula:

$\det(A) = 1 \cdot A_{11} + 2 \cdot A_{12} + 4 \cdot A_{13}$

$\det(A) = 1 \cdot (0) + 2 \cdot (0) + 4 \cdot (-13)$

$\det(A) = 0 + 0 - 52$

$\det(A) = -52$

Both methods yield the same result, $-52$. This confirms that the choice of row or column for expansion does not affect the determinant value.


Determinant of a Matrix of Order Three using Sarrus Rule (Only for $3 \times 3$ Matrices)

Sarrus' rule is a simple mnemonic that provides a formula for calculating the determinant of a $3 \times 3$ matrix. It is important to note that this rule is valid only for $3 \times 3$ matrices and does not apply to determinants of order higher than three.

To use Sarrus' rule for a matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, we rewrite the first two columns of the matrix to the right of the original determinant notation:

$\begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \begin{matrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{matrix}$

The determinant is then calculated as the sum of the products of the elements along the three diagonals running from top-left to bottom-right, minus the sum of the products of the elements along the three diagonals running from top-right to bottom-left.

Main diagonals (top-left to bottom-right): $a_{11}a_{22}a_{33}$, $a_{12}a_{23}a_{31}$, $a_{13}a_{21}a_{32}$

Anti-diagonals (top-right to bottom-left): $a_{13}a_{22}a_{31}$, $a_{11}a_{23}a_{32}$, $a_{12}a_{21}a_{33}$

The formula for the determinant using Sarrus' rule is:

$\det(A) = (a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}) - (a_{13}a_{22}a_{31} + a_{11}a_{23}a_{32} + a_{12}a_{21}a_{33})$

... (6)

This formula is equivalent to the cofactor expansion shown in Equation 5. Sarrus' rule provides a quick way to compute the determinant of a $3 \times 3$ matrix, but unlike cofactor expansion, which applies to determinants of any order, it does not generalize beyond order three.
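The rule translates directly into code. Below is a short Python sketch of Equation 6, applied to the matrix of Example 3 above so the result can be cross-checked against the cofactor expansion; the function name `sarrus` is illustrative.

```python
# Sarrus' rule (Equation 6): valid for 3x3 matrices only.
def sarrus(m):
    pos = (m[0][0] * m[1][1] * m[2][2]      # a11 a22 a33
           + m[0][1] * m[1][2] * m[2][0]    # a12 a23 a31
           + m[0][2] * m[1][0] * m[2][1])   # a13 a21 a32
    neg = (m[0][2] * m[1][1] * m[2][0]      # a13 a22 a31
           + m[0][0] * m[1][2] * m[2][1]    # a11 a23 a32
           + m[0][1] * m[1][0] * m[2][2])   # a12 a21 a33
    return pos - neg

print(sarrus([[1, 2, 4], [-1, 3, 0], [4, 1, 0]]))  # -52, agreeing with Example 3
```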



Properties of Determinants

Understanding the properties of determinants significantly simplifies their evaluation and manipulation, especially for larger matrices where cofactor expansion can become very complex. These properties are fundamental tools used in solving systems of linear equations, finding eigenvalues, and other advanced topics in linear algebra.

Let $A$ be a square matrix.


Property 1: Determinant of the Transpose

The value of the determinant of a square matrix remains unchanged if its rows and columns are interchanged. In other words, the determinant of a matrix is equal to the determinant of its transpose.

If $A'$ denotes the transpose of matrix $A$, then:

$\det(A') = \det(A)$

... (1)

This property implies that any property of a determinant involving rows also applies to columns, and vice-versa.

Example 1. Verify Property 1 for the matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$.

Answer:

Given matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$.

First, calculate the determinant of $A$:

$\det(A) = \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = (1)(4) - (2)(3) = 4 - 6 = -2$

Next, find the transpose of matrix $A$. The transpose $A'$ is obtained by interchanging the rows and columns of $A$.

$A' = \begin{pmatrix} 1 & 3 \\ 2 & 4 \end{pmatrix}$

Now, calculate the determinant of $A'$:

$\det(A') = \begin{vmatrix} 1 & 3 \\ 2 & 4 \end{vmatrix} = (1)(4) - (3)(2) = 4 - 6 = -2$

Since $\det(A') = -2$ and $\det(A) = -2$, we have $\det(A') = \det(A)$. Thus, Property 1 is verified for this matrix.


Property 2: Effect of Interchanging Rows or Columns

If any two rows (or any two columns) of a determinant are interchanged, the sign of the determinant changes, but its absolute value remains the same.

Let $A$ be a determinant. If a new determinant $B$ is obtained by interchanging any two rows or columns of $A$, then:

$\det(B) = -\det(A)$

... (2)

For example, if $A = \begin{vmatrix} a & b \\ c & d \end{vmatrix}$, then $\det(A) = ad - bc$. If we interchange the rows to get $B = \begin{vmatrix} c & d \\ a & b \end{vmatrix}$, then $\det(B) = cb - da = -(ad - bc) = -\det(A)$.

Example 2. Verify Property 2 for $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ by interchanging its rows.

Answer:

From Example 1, we know that $\det(A) = \begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = -2$.

Let $B$ be the matrix obtained by interchanging the first row ($R_1$) and the second row ($R_2$) of $A$. The operation is $R_1 \leftrightarrow R_2$.

$B = \begin{pmatrix} 3 & 4 \\ 1 & 2 \end{pmatrix}$

Now, calculate the determinant of $B$:

$\det(B) = \begin{vmatrix} 3 & 4 \\ 1 & 2 \end{vmatrix} = (3)(2) - (4)(1) = 6 - 4 = 2$

We compare $\det(B)$ with $-\det(A)$.

$-\det(A) = -(-2) = 2$

Since $\det(B) = 2$ and $-\det(A) = 2$, we have $\det(B) = -\det(A)$. Thus, Property 2 is verified for this matrix.


Property 3: Identical Rows or Columns

If any two rows (or any two columns) of a determinant are identical, then the value of the determinant is zero. Identical means that the corresponding elements in the two rows (or columns) are exactly the same.

If a determinant $A$ has two identical rows or columns, then:

$\det(A) = 0$

... (3)

**Proof Idea:** Assume a determinant $A$ has two identical rows. Let's interchange these two identical rows to obtain a new determinant, say $B$. According to Property 2, the sign of the determinant should change upon interchanging two rows, so $\det(B) = -\det(A)$. However, since the two rows are identical, interchanging them does not change the matrix at all, meaning $B$ is identical to $A$. Therefore, $\det(B) = \det(A)$.

Combining these two results, we have $\det(A) = -\det(A)$. This equation can only be true if $2\det(A) = 0$, which implies $\det(A) = 0$. The same logic applies if two columns are identical.

Example 3. Evaluate the determinant $\begin{vmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{vmatrix}$.

Answer:

Given determinant $\begin{vmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{vmatrix}$.

Observe that the first row ($R_1$) and the second row ($R_2$) have identical elements: $a_{11}=a_{21}=1$, $a_{12}=a_{22}=2$, $a_{13}=a_{23}=3$.

Since two rows of the determinant are identical ($R_1 = R_2$), by Property 3, the value of the determinant is zero.

$\begin{vmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 4 & 5 & 6 \end{vmatrix} = 0$


Property 4: Multiplication by a Scalar

If each element of a single row (or a single column) of a determinant is multiplied by a constant $k$, then the value of the determinant is multiplied by $k$.

Let $A$ be a determinant, and $B$ be a determinant obtained by multiplying a specific row (say, the $i$-th row) or column (say, the $j$-th column) of $A$ by a scalar $k$. Then:

$\det(B) = k \cdot \det(A)$

... (4)

For example, $\begin{vmatrix} ka & kb \\ c & d \end{vmatrix} = k \begin{vmatrix} a & b \\ c & d \end{vmatrix}$ and $\begin{vmatrix} a & kb \\ c & kd \end{vmatrix} = k \begin{vmatrix} a & b \\ c & d \end{vmatrix}$.

**Important Note:** This is different from scalar multiplication of a matrix. If a matrix $A$ of order $n \times n$ is multiplied by a scalar $k$ (meaning *every* element is multiplied by $k$), the resulting matrix is $kA$. The determinant of $kA$ is related to $\det(A)$ by $\det(kA) = k^n \det(A)$. This is because multiplying a matrix by $k$ is equivalent to multiplying each of its $n$ rows (or columns) by $k$, and each multiplication by $k$ brings out a factor of $k$.
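The distinction is easy to check numerically. The following Python sketch (assuming `numpy` is available) scales one row of the matrix from Example 4 below and then the whole matrix, confirming the factor $k$ in the first case and $k^n$ in the second.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])    # the matrix from Example 4 below
k = 2.0

B = A.copy()
B[0, :] *= k                              # multiply a single row by k (Property 4)

print(np.isclose(np.linalg.det(B), k * np.linalg.det(A)))         # True: factor k
print(np.isclose(np.linalg.det(k * A), k**2 * np.linalg.det(A)))  # True: factor k^n with n = 2
```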

Example 4. Evaluate $\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix}$ and compare its value with $\begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix}$.

Answer:

Given determinant $\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix}$.

Calculate its value:

$\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix} = (2)(5) - (4)(3) = 10 - 12 = -2$

Consider the second determinant $\begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix}$.

Calculate its value:

$\begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix} = (1)(5) - (2)(3) = 5 - 6 = -1$

Now, observe the relationship between the two determinants. The first row of $\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix}$ is $(2, 4)$, which is $2 \times (1, 2)$. The first row of $\begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix}$ is $(1, 2)$. The second rows are identical $(3, 5)$.

So, $\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix} = \begin{vmatrix} 2 \times 1 & 2 \times 2 \\ 3 & 5 \end{vmatrix}$

According to Property 4, if a row is multiplied by a constant (here, $k=2$), the determinant value is multiplied by that constant.

$\begin{vmatrix} 2 & 4 \\ 3 & 5 \end{vmatrix} = 2 \times \begin{vmatrix} 1 & 2 \\ 3 & 5 \end{vmatrix}$

Substituting the calculated values:

$-2 = 2 \times (-1)$

$-2 = -2$

This equality holds true. Thus, the property is verified.


Property 5: Sum of Terms in a Row or Column

If some or all elements of a row (or column) of a determinant are expressed as the sum of two (or more) terms, then the determinant can be expressed as the sum of two (or more) determinants.

For example, if the first column of a $3 \times 3$ determinant has elements $a_1+x, a_2+y, a_3+z$, while the other columns are $C_2$ and $C_3$, then:

$\begin{vmatrix} a_1+x & b_1 & c_1 \\ a_2+y & b_2 & c_2 \\ a_3+z & b_3 & c_3 \end{vmatrix} = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} x & b_1 & c_1 \\ y & b_2 & c_2 \\ z & b_3 & c_3 \end{vmatrix}$

... (5)

This property holds for any row or column, and for sums of more than two terms.


Property 6: Row/Column Operations

If, to each element of any row (or column), the equimultiples of corresponding elements of another row (or column) are added, then the value of the determinant remains the same.

This means that applying the elementary row operation $R_i \to R_i + k R_j$ or the elementary column operation $C_i \to C_i + k C_j$ to a determinant does not change its value.

$\det(\text{matrix after } R_i \to R_i + k R_j) = \det(\text{original matrix})$

... (6)

**Proof Idea (using Property 5 and Property 3):** Consider a determinant $\Delta = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$. Let's perform the operation $R_1 \to R_1 + k R_2$. The new determinant $\Delta'$ is:

$\Delta' = \begin{vmatrix} a_1+ka_2 & b_1+kb_2 & c_1+kc_2 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$

Using Property 5, we can split this determinant into the sum of two determinants:

$\Delta' = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} + \begin{vmatrix} ka_2 & kb_2 & kc_2 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$

(By Property 5)

The first determinant is the original determinant $\Delta$. In the second determinant, we can take the common factor $k$ out of the first row using Property 4:

$\Delta' = \Delta + k \begin{vmatrix} a_2 & b_2 & c_2 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$

(By Property 4)

Now, observe the second determinant. Its first row ($R_1$) and second row ($R_2$) are identical. According to Property 3, the value of a determinant with two identical rows is zero.

$\Delta' = \Delta + k \cdot 0$

(By Property 3)

$\Delta' = \Delta$

... (7)

This proves that the determinant value remains unchanged after applying the operation $R_1 \to R_1 + k R_2$. The same logic applies to other row operations ($R_i \to R_i + k R_j$) and column operations ($C_i \to C_i + k C_j$). This property is fundamental for simplifying determinants by creating zeros in rows or columns, which makes subsequent expansion easier.
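This invariance can also be checked numerically. Here is a small Python sketch assuming `numpy` is available; the sample matrix and the multiplier $k$ are arbitrary choices for illustration.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])           # arbitrary sample matrix, det(A) = 1
k = -2.0

B = A.copy()
B[0, :] += k * B[1, :]                    # elementary operation R1 -> R1 + k*R2

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True: determinant unchanged
```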


Property 7: Row or Column of Zeros

If any row or column of a determinant consists entirely of zeros, then the value of the determinant is zero.

For example, $\begin{vmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{vmatrix} = 0$ and $\begin{vmatrix} 1 & 0 & 3 \\ 4 & 0 & 6 \\ 7 & 0 & 9 \end{vmatrix} = 0$.

$\det(A) = 0$ if any row or column of $A$ is the zero vector.

... (8)

**Proof Idea:** Expand the determinant along the row or column that consists of all zeros. The determinant is the sum of the products of the elements of that row/column with their corresponding cofactors. Since every element in the chosen row/column is zero, each term in the expansion will be $0 \times (\text{cofactor})$. The sum of all these zero terms will be zero. For example, expanding the first determinant above along the second row ($R_2$):

$\begin{vmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{vmatrix} = 0 \cdot A_{21} + 0 \cdot A_{22} + 0 \cdot A_{23} = 0 + 0 + 0 = 0$.

Example 5. Evaluate $\begin{vmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{vmatrix}$.

Answer:

Given determinant $\begin{vmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{vmatrix}$.

Observe that the second row ($R_2$) consists entirely of zeros.

According to Property 7, if any row or column of a determinant is all zeros, the value of the determinant is zero.

$\begin{vmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 4 & 5 & 6 \end{vmatrix} = 0$


Property 8: Determinant of a Product

The determinant of the product of two square matrices of the same order is equal to the product of their individual determinants. This very important property is sometimes called the multiplicative property of determinants; it is a special case of the Cauchy–Binet formula.

If $A$ and $B$ are square matrices of the same order, then:

$\det(AB) = \det(A) \cdot \det(B)$

... (9)

This property can be extended to the product of more than two matrices: $\det(ABC) = \det(A)\det(B)\det(C)$. It also implies that $\det(A^n) = (\det(A))^n$ for any positive integer $n$.
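Again, this is straightforward to verify numerically. A short Python sketch (assuming `numpy`), with two arbitrary sample matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
B = np.array([[1.0, 3.0], [2.0, 4.0]])

# det(AB) = det(A) * det(B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
# det(A^3) = det(A)^3
print(np.isclose(np.linalg.det(A @ A @ A), np.linalg.det(A) ** 3))            # True
```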


Property 9: Determinant of Specific Matrix Types

For certain types of matrices, the determinant calculation is simplified:

- The determinant of an identity matrix is 1: $\det(I_n) = 1$.
- The determinant of a diagonal matrix is the product of its diagonal elements.
- The determinant of a triangular matrix (upper or lower) is likewise the product of its diagonal elements.
- The determinant of a scalar matrix $kI_n$ of order $n$ is $k^n$.



Area of a Triangle

Determinants provide an elegant method to calculate the area of a triangle when the coordinates of its vertices are known. This approach is particularly useful in coordinate geometry and has a direct connection to the geometric interpretation of determinants.


Formula for Area of a Triangle

Let the vertices of a triangle be $A(x_1, y_1)$, $B(x_2, y_2)$, and $C(x_3, y_3)$ in a Cartesian coordinate system. The area of the triangle ABC can be calculated using the following determinant formula:

Area $= \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right|$

... (1)

Here, the determinant $\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$ calculates twice the "signed area" of the triangle. The term "signed area" means that the value of the determinant might be positive or negative, depending on the orientation of the vertices (clockwise or counter-clockwise).

Since the geometrical area of a triangle must always be a non-negative value, we take the absolute value (or modulus) of the determinant before multiplying by $\frac{1}{2}$. This ensures the calculated area is always positive or zero.

So, the procedure is:

  1. Set up the $3 \times 3$ determinant with the vertex coordinates and a column of ones.
  2. Evaluate the determinant.
  3. Take the absolute value of the determinant.
  4. Multiply the result by $\frac{1}{2}$.

Condition for Collinearity of Three Points

Three distinct points in a plane are said to be collinear if they lie on the same straight line. If three points are collinear, they cannot form a triangle with a positive area. In fact, the area of the degenerate triangle formed by three collinear points is zero.

Using the determinant formula for the area of a triangle, we can derive the condition for collinearity. Three points $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ are collinear if and only if the area of the triangle formed by these points is zero.

Area $= 0$

$\frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right| = 0$

This implies that the value of the determinant must be zero:

$\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = 0$

... (2)

Therefore, three points $(x_1, y_1)$, $(x_2, y_2)$, and $(x_3, y_3)$ are collinear if and only if the determinant $\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$ evaluates to zero.
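Both the area formula (Equation 1) and the collinearity test (Equation 2) reduce to evaluating the same $3 \times 3$ determinant. The Python sketch below expands that determinant directly; the helper names are our own, and the sample points are those of Examples 1 and 2 below.

```python
def signed_area_det(p1, p2, p3):
    """Value of |x1 y1 1; x2 y2 1; x3 y3 1|, expanded along the first row."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)

def triangle_area(p1, p2, p3):
    """Area = (1/2) |determinant| (Equation 1)."""
    return abs(signed_area_det(p1, p2, p3)) / 2

print(triangle_area((1, 0), (6, 0), (4, 3)))         # 7.5 = 15/2 (Example 1 below)
print(signed_area_det((1, 1), (2, 3), (3, 5)) == 0)  # True: collinear (Example 2 below)
```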


Example 1. Find the area of the triangle with vertices $A(1, 0)$, $B(6, 0)$, and $C(4, 3)$.

Answer:

Let the given vertices be $(x_1, y_1) = (1, 0)$, $(x_2, y_2) = (6, 0)$, and $(x_3, y_3) = (4, 3)$.

Using the determinant formula for the area of a triangle (Equation 1):

Area $= \frac{1}{2} \left| \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} \right| = \frac{1}{2} \left| \begin{vmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{vmatrix} \right|$

Now, we evaluate the determinant $\begin{vmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{vmatrix}$. We can expand along any row or column. Expanding along the second column ($C_2$) is efficient as it contains two zero elements.

$\begin{vmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{vmatrix} = a_{12}A_{12} + a_{22}A_{22} + a_{32}A_{32}$

$= 0 \cdot A_{12} + 0 \cdot A_{22} + 3 \cdot A_{32}$

$= 3 \cdot A_{32}$

Next, we calculate the cofactor $A_{32}$.

$A_{32} = (-1)^{3+2} M_{32} = (-1)^5 M_{32} = -1 \cdot M_{32}$

The minor $M_{32}$ is the determinant of the submatrix obtained by deleting the 3rd row and 2nd column:

$M_{32} = \begin{vmatrix} 1 & 1 \\ 6 & 1 \end{vmatrix}$

Using the $2 \times 2$ determinant formula:

$M_{32} = (1)(1) - (1)(6) = 1 - 6 = -5$

So, the cofactor $A_{32} = -1 \cdot (-5) = 5$.

Substitute this back into the determinant expansion:

$\begin{vmatrix} 1 & 0 & 1 \\ 6 & 0 & 1 \\ 4 & 3 & 1 \end{vmatrix} = 3 \cdot A_{32} = 3 \cdot 5 = 15$

Finally, calculate the area using Equation 1:

Area $= \frac{1}{2} \left| 15 \right| = \frac{1}{2} \times 15 = \frac{15}{2}$

The area of the triangle is $\frac{15}{2}$ square units.


Example 2. Show that the points $A(1, 1)$, $B(2, 3)$, and $C(3, 5)$ are collinear.

Answer:

To show that the points $A(1, 1)$, $B(2, 3)$, and $C(3, 5)$ are collinear, we need to verify if the determinant formed by their coordinates is zero (as per the condition for collinearity, Equation 2).

Let $(x_1, y_1) = (1, 1)$, $(x_2, y_2) = (2, 3)$, and $(x_3, y_3) = (3, 5)$.

Consider the determinant:

$\Delta = \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} = \begin{vmatrix} 1 & 1 & 1 \\ 2 & 3 & 1 \\ 3 & 5 & 1 \end{vmatrix}$

We can evaluate this determinant using expansion or by using properties of determinants to simplify it first. Let's use properties to create zeros in the third column. Apply the row operations $R_2 \to R_2 - R_1$ and $R_3 \to R_3 - R_1$. (These operations do not change the value of the determinant by Property 6).

$\Delta = \begin{vmatrix} 1 & 1 & 1 \\ 2-1 & 3-1 & 1-1 \\ 3-1 & 5-1 & 1-1 \end{vmatrix} = \begin{vmatrix} 1 & 1 & 1 \\ 1 & 2 & 0 \\ 2 & 4 & 0 \end{vmatrix}$

Now, expand the determinant along the third column ($C_3$), which contains two zeros:

$\Delta = a_{13}A_{13} + a_{23}A_{23} + a_{33}A_{33}$

$\Delta = 1 \cdot A_{13} + 0 \cdot A_{23} + 0 \cdot A_{33}$

$\Delta = 1 \cdot A_{13}$

Calculate the cofactor $A_{13}$.

$A_{13} = (-1)^{1+3} M_{13} = (-1)^4 M_{13} = (+1) M_{13}$

The minor $M_{13}$ is the determinant of the submatrix obtained by deleting the 1st row and 3rd column of the *transformed* determinant:

$M_{13} = \begin{vmatrix} 1 & 2 \\ 2 & 4 \end{vmatrix}$

Using the $2 \times 2$ determinant formula:

$M_{13} = (1)(4) - (2)(2) = 4 - 4 = 0$

So, the cofactor $A_{13} = (+1) \cdot 0 = 0$.

Substitute this back into the determinant evaluation:

$\Delta = 1 \cdot A_{13} = 1 \cdot 0 = 0$

Since the value of the determinant is 0, the points $A(1, 1)$, $B(2, 3)$, and $C(3, 5)$ are collinear.

Alternate Answer (Using Direct Expansion):

We can also evaluate the determinant $\begin{vmatrix} 1 & 1 & 1 \\ 2 & 3 & 1 \\ 3 & 5 & 1 \end{vmatrix}$ by expanding directly along the first row:

$\Delta = 1 \cdot A_{11} + 1 \cdot A_{12} + 1 \cdot A_{13}$

Calculate the cofactors $A_{11}, A_{12}, A_{13}$:

$A_{11} = (-1)^{1+1} M_{11} = M_{11} = \begin{vmatrix} 3 & 1 \\ 5 & 1 \end{vmatrix} = (3)(1) - (1)(5) = 3 - 5 = -2$

$A_{12} = (-1)^{1+2} M_{12} = -M_{12} = -\begin{vmatrix} 2 & 1 \\ 3 & 1 \end{vmatrix} = -((2)(1) - (1)(3)) = -(2 - 3) = -(-1) = 1$

$A_{13} = (-1)^{1+3} M_{13} = M_{13} = \begin{vmatrix} 2 & 3 \\ 3 & 5 \end{vmatrix} = (2)(5) - (3)(3) = 10 - 9 = 1$

Substitute these cofactor values:

$\Delta = 1 \cdot (-2) + 1 \cdot (1) + 1 \cdot (1)$

$\Delta = -2 + 1 + 1 = 0$

Since the determinant is 0, the points are collinear. This confirms the result obtained using row operations.



Adjoint and Inverse of a Square Matrix

In the study of matrices, the concepts of adjoint and inverse are fundamental, particularly when dealing with systems of linear equations and matrix algebra. These concepts are defined specifically for square matrices.


Adjoint of a Matrix

Let $A = [a_{ij}]$ be a square matrix of order $n \times n$. The adjoint of matrix $A$, denoted by $\text{adj} A$, is defined as the transpose of the matrix formed by the cofactors of the elements of $A$.

First, we construct the matrix of cofactors of $A$. Let $A_{ij}$ be the cofactor of the element $a_{ij}$ in the matrix $A$. Recall that the cofactor $A_{ij} = (-1)^{i+j} M_{ij}$, where $M_{ij}$ is the minor of $a_{ij}$. The matrix of cofactors, let's call it $C$, is given by $C = [A_{ij}]_{n \times n}$.

The adjoint of $A$ is the transpose of this cofactor matrix $C$.

$\text{adj} A = C' = [A_{ji}]_{n \times n}$

... (1)

This means the element in the $i$-th row and $j$-th column of $\text{adj} A$ is the cofactor of the element $a_{ji}$ (which is located in the $j$-th row and $i$-th column) of the original matrix $A$.

Adjoint of a $2 \times 2$ Matrix

For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, let's find the cofactors:

$A_{11} = (-1)^{1+1} M_{11} = (+1) \det([d]) = d$

$A_{12} = (-1)^{1+2} M_{12} = (-1) \det([c]) = -c$

$A_{21} = (-1)^{2+1} M_{21} = (-1) \det([b]) = -b$

$A_{22} = (-1)^{2+2} M_{22} = (+1) \det([a]) = a$

The matrix of cofactors is $\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}$.

The adjoint of $A$ is the transpose of this matrix:

$\text{adj} A = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix}' = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$

... (2)

From Equation 2, we can see a quick way to find the adjoint of a $2 \times 2$ matrix: swap the elements on the main diagonal and change the signs of the elements on the anti-diagonal.

Adjoint of a $3 \times 3$ Matrix

For a $3 \times 3$ matrix $A = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix}$, calculating the adjoint involves computing all nine cofactors $A_{ij}$ and then taking the transpose of the resulting cofactor matrix.

The matrix of cofactors is $\begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}$.

The adjoint of $A$ is the transpose of this matrix:

$\text{adj} A = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix}' = \begin{pmatrix} A_{11} & A_{21} & A_{31} \\ A_{12} & A_{22} & A_{32} \\ A_{13} & A_{23} & A_{33} \end{pmatrix}$

... (3)

Example 1. Find the adjoint of the matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$.

Answer:

Given matrix $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$. This is a $2 \times 2$ matrix.

Using the shortcut formula for a $2 \times 2$ adjoint (Equation 2), we swap the main diagonal elements (1 and 4) and change the signs of the off-diagonal elements (2 and 3):

$\text{adj} A = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}$

Alternate Answer (Using Definition):

Alternatively, we can find the cofactors of $A$:

$A_{11} = (-1)^{1+1} M_{11} = (+1) \det([4]) = 4$

$A_{12} = (-1)^{1+2} M_{12} = (-1) \det([3]) = -3$

$A_{21} = (-1)^{2+1} M_{21} = (-1) \det([2]) = -2$

$A_{22} = (-1)^{2+2} M_{22} = (+1) \det([1]) = 1$

The matrix of cofactors is $\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix}$.

The adjoint is the transpose of the cofactor matrix:

$\text{adj} A = \begin{pmatrix} 4 & -3 \\ -2 & 1 \end{pmatrix}' = \begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix}$

Both methods yield the same result.


Theorem: Product of a Matrix and its Adjoint

For any square matrix $A$ of order $n \times n$, the product of the matrix $A$ and its adjoint ($\text{adj} A$) is equal to the product of the determinant of $A$ and the identity matrix of order $n$. This holds true regardless of the order of multiplication ($A \cdot \text{adj} A$ or $\text{adj} A \cdot A$).

$A (\text{adj} A) = (\text{adj} A) A = |A| I_n$

... (4)

where $|A|$ is the determinant of $A$, and $I_n$ is the identity matrix of order $n$.

**Proof Idea (for a $3 \times 3$ matrix):**

Let $A = [a_{ij}]$ and $\text{adj} A = [B_{jk}]$, where $B_{jk} = A_{kj}$ (the cofactor of $a_{kj}$). The element in the $i$-th row and $k$-th column of the product matrix $A (\text{adj} A)$ is given by the sum of the products of elements from the $i$-th row of $A$ and the $k$-th column of $\text{adj} A$:

$[A (\text{adj} A)]_{ik} = \sum_{j=1}^{n} a_{ij} B_{jk} = \sum_{j=1}^{n} a_{ij} A_{kj}$

This sum has a special property:

- When $i = k$, the sum $\sum_{j} a_{ij} A_{ij}$ is precisely the cofactor expansion of $\det(A)$ along the $i$-th row, so each diagonal entry of the product equals $|A|$.
- When $i \neq k$, the sum $\sum_{j} a_{ij} A_{kj}$ is the expansion of a determinant whose $i$-th and $k$-th rows are identical; by Property 3, such an expansion (by "alien" cofactors) equals zero.

Combining these results, the product matrix $A (\text{adj} A)$ has $|A|$ on the main diagonal and zeros elsewhere:

$A (\text{adj} A) = \begin{pmatrix} |A| & 0 & \cdots & 0 \\ 0 & |A| & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & |A| \end{pmatrix} = |A| \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix} = |A| I_n$

A similar argument can be used to show that $(\text{adj} A) A = |A| I_n$.
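The theorem can be spot-checked numerically. The Python sketch below (assuming `numpy`) builds the adjoint of the $2 \times 2$ matrix from Example 1 using the swap-and-negate shortcut of Equation 2, then verifies both products:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
# adj(A) for a 2x2 matrix: swap the diagonal, negate the off-diagonal (Equation 2).
adjA = np.array([[A[1, 1], -A[0, 1]],
                 [-A[1, 0], A[0, 0]]])

detA = np.linalg.det(A)                         # -2 for this matrix
print(np.allclose(A @ adjA, detA * np.eye(2)))  # True: A (adj A) = |A| I
print(np.allclose(adjA @ A, detA * np.eye(2)))  # True: (adj A) A = |A| I
```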


Inverse of a Matrix

Using the theorem $A (\text{adj} A) = (\text{adj} A) A = |A| I_n$, we can define the inverse of a matrix.

If $|A| \neq 0$, we can divide Equation 4 by $|A|$:

$A \left(\frac{1}{|A|} \text{adj} A\right) = \left(\frac{1}{|A|} \text{adj} A\right) A = \frac{|A|}{|A|} I_n = I_n$

... (5)

By the definition of the inverse of a matrix, if $AB = BA = I$, then $B$ is the inverse of $A$, denoted by $A^{-1}$. Comparing this definition with Equation 5, we can see that $\left(\frac{1}{|A|} \text{adj} A\right)$ acts as the inverse of $A$.

Therefore, the inverse of a square matrix $A$ is given by:

$A^{-1} = \frac{1}{|A|} (\text{adj} A)$

... (6)

This formula provides a direct method for calculating the inverse of a matrix, provided its determinant is non-zero.

Thus, a square matrix $A$ is invertible if and only if it is non-singular.

Example 2. Find the inverse of the matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$ using the adjoint method.

Answer:

Given matrix $A = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix}$.

Step 1: Calculate the determinant of $A$.

$|A| = \begin{vmatrix} 2 & 1 \\ 1 & 1 \end{vmatrix} = (2)(1) - (1)(1) = 2 - 1 = 1$

Since $|A| = 1 \neq 0$, the matrix $A$ is non-singular, and its inverse exists.

Step 2: Calculate the adjoint of $A$. Using the shortcut for a $2 \times 2$ matrix (Equation 2):

$\text{adj} A = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$

Step 3: Use the formula $A^{-1} = \frac{1}{|A|} (\text{adj} A)$ (Equation 6).

$A^{-1} = \frac{1}{1} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = 1 \cdot \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$

The inverse of the matrix $A$ is $\begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}$. We can verify this by checking $AA^{-1} = I$:

$A A^{-1} = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} (2)(1) + (1)(-1) & (2)(-1) + (1)(2) \\ (1)(1) + (1)(-1) & (1)(-1) + (1)(2) \end{pmatrix} = \begin{pmatrix} 2 - 1 & -2 + 2 \\ 1 - 1 & -1 + 2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I_2$


Properties of Inverse Matrices

If $A$ and $B$ are invertible matrices of the same order $n$, the following properties hold:

  1. The inverse of the inverse of a matrix is the matrix itself:

    $(A^{-1})^{-1} = A$

  2. The inverse of the product of two matrices is the product of their inverses in the reverse order:

    $(AB)^{-1} = B^{-1}A^{-1}$

    This property can be extended to a product of multiple matrices: $(ABC)^{-1} = C^{-1}B^{-1}A^{-1}$, and so on.

  3. The inverse of the transpose of a matrix is equal to the transpose of its inverse:

    $(A')^{-1} = (A^{-1})'$

  4. The determinant of the inverse of a matrix is the reciprocal of the determinant of the original matrix:

    $\det(A^{-1}) = \frac{1}{\det(A)}$

    ... (7)

    Equivalently, $|A^{-1}| = \frac{1}{|A|}$.

    **Proof of Property 4:**

    By the definition of the inverse matrix, we know that $AA^{-1} = I_n$.

    Taking the determinant of both sides of this matrix equation:

    $\det(AA^{-1}) = \det(I_n)$

    Using Property 8 of determinants, which states that the determinant of a product of matrices is the product of their determinants ($\det(AB) = \det(A)\det(B)$), the left side becomes:

    $\det(A) \cdot \det(A^{-1})$

    The determinant of the identity matrix $I_n$ is 1. So the right side is:

    $\det(I_n) = 1$

    Equating the left and right sides:

    $\det(A) \cdot \det(A^{-1}) = 1$

    Since $A$ is invertible, it is non-singular, meaning $\det(A) \neq 0$. Therefore, we can divide both sides by $\det(A)$:

    $\det(A^{-1}) = \frac{1}{\det(A)}$

    This completes the proof of Property 4.



Solution of a System of Linear Equations

Determinants and the matrix inverse provide powerful methods for solving systems of linear equations. A system of linear equations can be effectively represented and solved using matrices.


Matrix Representation of a System of Linear Equations

Consider a system of $n$ linear equations in $n$ variables $x_1, x_2, \dots, x_n$:

$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1$

$a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2$

$\vdots$

$a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n = b_n$

This system can be concisely written in the matrix form $AX = B$. To understand this, let's define the matrices:

$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix}$ is the coefficient matrix, formed by the coefficients of the variables.

$X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$ is the variable matrix (or column vector), containing the variables.

$B = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$ is the constant matrix (or column vector), containing the constants on the right-hand side of the equations.

The matrix product $AX$ is defined as:

$AX = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n \end{pmatrix}$

When we set this product equal to the constant matrix $B$, i.e., $AX = B$, we get:

$\begin{pmatrix} a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n \\ \vdots \\ a_{n1}x_1 + a_{n2}x_2 + \dots + a_{nn}x_n \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}$

By the definition of matrix equality (two matrices are equal if their corresponding elements are equal), this single matrix equation is equivalent to the original system of $n$ linear equations. Thus, the matrix form $AX=B$ is a compact and convenient representation of the system.


Consistency of a System

A system of linear equations is said to be consistent if it has at least one solution. This means there exists a set of values for the variables ($x_1, x_2, \dots, x_n$) that satisfies all the equations simultaneously.

A system of linear equations is said to be inconsistent if it has no solution. There is no set of values for the variables that can satisfy all equations simultaneously.

Criterion for Consistency (using Determinant and Adjoint)

For a system of linear equations $AX=B$, where $A$ is a square matrix of order $n \times n$:

1. Case 1: If $|A| \neq 0$ (A is non-singular).

If the determinant of the coefficient matrix A is non-zero, the matrix A is invertible, and its inverse $A^{-1}$ exists and is unique. In this case, the system is consistent and has a unique solution. The solution is given by $X = A^{-1}B$.

Derivation of the unique solution:

Starting with the matrix equation:

$AX = B$

Since $|A| \neq 0$, $A^{-1}$ exists. Premultiply both sides of the equation by $A^{-1}$:

$A^{-1}(AX) = A^{-1}B$

Using the associativity property of matrix multiplication:

$(A^{-1}A)X = A^{-1}B$

[Associativity]

By the definition of the matrix inverse, $A^{-1}A = I$, where $I$ is the identity matrix of order $n \times n$:

$IX = A^{-1}B$

[Definition of inverse]

Multiplying any matrix $X$ by the identity matrix $I$ results in $X$:

$X = A^{-1}B$

... (i)

Since $A^{-1}$ is unique when $|A| \neq 0$ and B is a fixed matrix, the product $A^{-1}B$ gives a unique matrix $X$. This unique matrix $X$ represents the unique set of values for the variables ($x_1, x_2, \dots, x_n$) that solves the system.

2. Case 2: If $|A| = 0$ (A is singular).

If the determinant of the coefficient matrix A is zero, the matrix A is singular and is not invertible ($A^{-1}$ does not exist). In this case, the system of equations may be either consistent with infinitely many solutions or inconsistent with no solution. To distinguish between these possibilities, we need to examine the product $(\text{adj} A)B$.

Recall the fundamental property relating a matrix, its adjoint, and its determinant: $A (\text{adj} A) = (\text{adj} A) A = |A| I$.

Starting with the matrix equation $AX=B$, premultiply both sides by $\text{adj} A$:

$(\text{adj} A)(AX) = (\text{adj} A)B$

Using associativity:

$((\text{adj} A)A)X = (\text{adj} A)B$

[Associativity]

Using the property $A (\text{adj} A) = (\text{adj} A) A = |A| I$:

$((\text{adj} A)A)X = |A|IX$

[Theorem on Adjoint]

So the equation becomes:

$|A|IX = (\text{adj} A)B$

If $|A|=0$, the left side becomes $0 \cdot IX = O$, where $O$ is the zero matrix of the same dimensions as $(\text{adj} A)B$.

Thus, if $|A|=0$, the equation reduces to:

$O = (\text{adj} A)B$

This equation must hold for a solution $X$ to exist.

* Subcase 2a: If $|A| = 0$ and $(\text{adj} A)B \neq O$.

In this situation, the equation $O = (\text{adj} A)B$ becomes $O = \text{a non-zero matrix}$, which is a contradiction. This means that no matrix $X$ can satisfy the equation $AX=B$. Therefore, the system is inconsistent and has no solution.

* Subcase 2b: If $|A| = 0$ and $(\text{adj} A)B = O$.

In this situation, the equation $O = (\text{adj} A)B$ becomes $O = O$, which is always true. This means that the condition derived from premultiplying by $\text{adj} A$ does not rule out the possibility of a solution. When $|A|=0$ and $(\text{adj} A)B = O$, the equations are linearly dependent. This leads to the system being consistent with infinitely many solutions. This typically happens when one or more equations are linear combinations of the others, and $(\text{adj} A)B = O$ ensures that the constant terms are also consistent with these dependencies.

Summary for $AX=B$ where $A$ is Square

- If $|A| \neq 0$: the system is consistent and has a unique solution, $X = A^{-1}B$ (computing $(\text{adj} A)B$ is not required).
- If $|A| = 0$ and $(\text{adj} A)B \neq O$: the system is inconsistent and has no solution.
- If $|A| = 0$ and $(\text{adj} A)B = O$: the system is consistent and has infinitely many solutions.

This criterion completely determines the consistency and the number of solutions of a system of $n$ linear equations in $n$ variables, using only the determinant and the adjoint of the coefficient matrix.
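The criterion mechanizes cleanly. The Python sketch below (assuming `numpy`) builds the adjoint from cofactors, since `numpy` offers no direct adjugate routine, and classifies a system accordingly; the function names are illustrative. It is applied to a sample singular system (the $a = 0$ case of Example 3 in the next section), which turns out to be inconsistent.

```python
import numpy as np

def adjoint(A):
    """Adjoint = transpose of the cofactor matrix."""
    n = A.shape[0]
    C = np.zeros_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

def classify(A, B):
    if not np.isclose(np.linalg.det(A), 0):
        return "unique solution", np.linalg.solve(A, B)   # X = A^{-1} B
    if np.allclose(adjoint(A) @ B, 0):
        return "infinitely many solutions", None
    return "no solution", None

A = np.array([[1.0, 1.0, 1.0], [2.0, 3.0, 2.0], [0.0, 0.0, 0.0]])
B = np.array([1.0, 2.0, 4.0])
print(classify(A, B)[0])   # "no solution": |A| = 0 and (adj A)B != O
```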


Solution of Homogeneous Systems ($AX=O$)

A system of linear equations is called a homogeneous system if all the constant terms are zero. In matrix form, this is represented as $AX=O$, where $O$ is the zero matrix (a column vector of zeros).

Homogeneous systems are always consistent because they always have at least one solution, namely $X=O$ (i.e., $x_1=0, x_2=0, \dots, x_n=0$). This solution is called the trivial solution.

For a homogeneous system $AX=O$:

1. If $|A| \neq 0$, the system is consistent and has a unique solution. Using the matrix inverse method, $X = A^{-1}O = O$. Thus, the only solution is the trivial solution $X=O$.

2. If $|A| = 0$, the system is consistent and has infinitely many solutions. This is because if $|A|=0$, then $(\text{adj} A)O = O$ holds automatically, so the criterion for infinitely many solutions is met. These solutions include the trivial solution $X=O$ as well as non-trivial solutions, in which not all the variables are zero. Finding the non-trivial solutions requires other techniques, typically reducing the coefficient matrix to row echelon form.


Example 1. Examine the consistency of the system of equations:

$2x + 3y = 5$

$x - 2y = -1$

Answer:

The given system of equations can be written in the matrix form $AX=B$, where:

$A = \begin{pmatrix} 2 & 3 \\ 1 & -2 \end{pmatrix}$, $X = \begin{pmatrix} x \\ y \end{pmatrix}$, $B = \begin{pmatrix} 5 \\ -1 \end{pmatrix}$

To examine consistency, we first calculate the determinant of the coefficient matrix A:

$|A| = \begin{vmatrix} 2 & 3 \\ 1 & -2 \end{vmatrix} = (2)(-2) - (3)(1) = -4 - 3 = -7$

Since $|A| = -7 \neq 0$, the matrix A is non-singular.

According to the consistency criterion, if $|A| \neq 0$, the system of equations is consistent and has a unique solution.

Therefore, the given system of equations is consistent.

(Note: The question only asks to examine consistency, not to find the solution. The unique solution could be found using $X = A^{-1}B$ if required).



Solution of Linear Equations by Determinants

In Class XII, we study two primary methods for solving a system of linear equations using determinants and matrix concepts: the Matrix Inverse Method and Cramer's Rule. These methods are applicable primarily when the system has a unique solution, which is determined by the non-singularity of the coefficient matrix.

Matrix Inverse Method

Consider a system of $n$ linear equations in $n$ variables. Such a system can be written in matrix form as $AX = B$, where:

$A$ is the $n \times n$ coefficient matrix.

$X$ is the $n \times 1$ column matrix of variables.

$B$ is the $n \times 1$ column matrix of constants.

For a system $AX=B$, if the coefficient matrix $A$ is non-singular (i.e., its determinant $|A| \neq 0$), then the system has a unique solution. This unique solution is given by the matrix equation:

$X = A^{-1}B$

... (A)

This is because if $|A| \neq 0$, the inverse matrix $A^{-1}$ exists. Multiplying the equation $AX=B$ by $A^{-1}$ from the left, we get:

$A^{-1}(AX) = A^{-1}B$

$(A^{-1}A)X = A^{-1}B$ (by associative property of matrix multiplication)

$IX = A^{-1}B$ (since $A^{-1}A = I$, the identity matrix)

$X = A^{-1}B$ (since $IX = X$)

Thus, the solution matrix $X$ is obtained by multiplying the inverse of the coefficient matrix $A$ with the constant matrix $B$.

The steps to solve a system of linear equations using the Matrix Inverse Method are:

1. Write the given system of linear equations in the matrix form $AX = B$. Identify the matrices $A$, $X$, and $B$.

2. Calculate the determinant of the coefficient matrix, $|A|$.

3. Check the value of $|A|$:

- If $|A| \neq 0$, then $A$ is non-singular, and a unique solution exists. Proceed to step 4.

- If $|A| = 0$, then $A$ is singular, and the system either has no solution (inconsistent) or infinitely many solutions (consistent). In this case, the Matrix Inverse Method ($X=A^{-1}B$) cannot be used to find a unique solution. To determine consistency, we check the value of $(\text{adj} A)B$.

- If $|A|=0$ and $(\text{adj} A)B \neq O$ (the zero matrix), the system is inconsistent.

- If $|A|=0$ and $(\text{adj} A)B = O$, the system is consistent and has infinitely many solutions.

4. If $|A| \neq 0$, calculate the inverse of the matrix $A$ using the formula $A^{-1} = \frac{1}{|A|} (\text{adj} A)$. Recall that $\text{adj} A$ is the adjoint of matrix A, which is the transpose of the cofactor matrix of A.

5. Calculate the product $A^{-1}B$. This multiplication will result in a column matrix $X$.

6. Equate the elements of the resulting matrix $X$ with the variables in the variable matrix $X$ to obtain the values of the variables.
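For systems with a unique solution, the steps above compress to a few lines of Python (assuming `numpy`); `np.linalg.inv` performs step 4 numerically rather than via the adjoint formula. The data here are those of Example 1 below.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, -3.0]])   # coefficient matrix of Example 1 below
B = np.array([5.0, -4.0])                 # constant matrix

if not np.isclose(np.linalg.det(A), 0):   # step 3: |A| != 0, unique solution exists
    X = np.linalg.inv(A) @ B              # steps 4-5: X = A^{-1} B
    print(X)                              # [1.5714..., 1.8571...] = [11/7, 13/7]
```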


Cramer's Rule

Cramer's Rule is a method for solving a system of linear equations using determinants, provided the system has a unique solution. It offers explicit formulas for the values of the variables. This rule is valid only when the determinant of the coefficient matrix is non-zero.

Cramer's Rule for 2 variables

Consider a system of two linear equations in two variables $x$ and $y$:

$a_1 x + b_1 y = c_1$

$a_2 x + b_2 y = c_2$

This system can be represented by the coefficient matrix $A = \begin{pmatrix} a_1 & b_1 \\ a_2 & b_2 \end{pmatrix}$, the variable matrix $X = \begin{pmatrix} x \\ y \end{pmatrix}$, and the constant matrix $B = \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}$.

Define the following determinants:

$D = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}$ (This is the determinant of the coefficient matrix $A$, i.e., $D = |A|$.)

$D_x = \begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix}$ (This determinant is obtained by replacing the first column (coefficients of $x$) of $D$ with the column of constants $B$.)

$D_y = \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix}$ (This determinant is obtained by replacing the second column (coefficients of $y$) of $D$ with the column of constants $B$.)

If $D \neq 0$, the system has a unique solution given by:

$x = \frac{D_x}{D}$, $y = \frac{D_y}{D}$

... (1)

If $D=0$:

- If $D_x \neq 0$ or $D_y \neq 0$, the system is inconsistent (no solution). Geometrically, this corresponds to parallel and distinct lines.

- If $D_x = 0$ and $D_y = 0$, the system is consistent and has infinitely many solutions. Geometrically, this corresponds to coincident lines.

Cramer's Rule for 3 variables

Consider a system of three linear equations in three variables $x$, $y$, and $z$:

$a_1 x + b_1 y + c_1 z = d_1$

$a_2 x + b_2 y + c_2 z = d_2$

$a_3 x + b_3 y + c_3 z = d_3$

This system can be represented by the coefficient matrix $A = \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{pmatrix}$, the variable matrix $X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, and the constant matrix $B = \begin{pmatrix} d_1 \\ d_2 \\ d_3 \end{pmatrix}$.

Define the following determinants:

$D = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix}$ (Determinant of the coefficient matrix $A$, i.e., $D = |A|$.)

$D_x = \begin{vmatrix} d_1 & b_1 & c_1 \\ d_2 & b_2 & c_2 \\ d_3 & b_3 & c_3 \end{vmatrix}$ (Replace the 1st column (coefficients of $x$) of $D$ with the column of constants $B$.)

$D_y = \begin{vmatrix} a_1 & d_1 & c_1 \\ a_2 & d_2 & c_2 \\ a_3 & d_3 & c_3 \end{vmatrix}$ (Replace the 2nd column (coefficients of $y$) of $D$ with the column of constants $B$.)

$D_z = \begin{vmatrix} a_1 & b_1 & d_1 \\ a_2 & b_2 & d_2 \\ a_3 & b_3 & d_3 \end{vmatrix}$ (Replace the 3rd column (coefficients of $z$) of $D$ with the column of constants $B$.)

If $D \neq 0$, the system has a unique solution given by:

$x = \frac{D_x}{D}$, $y = \frac{D_y}{D}$, $z = \frac{D_z}{D}$

... (2)

If $D=0$:

- If $D_x \neq 0$ or $D_y \neq 0$ or $D_z \neq 0$, the system is inconsistent (no solution).

- If $D_x = D_y = D_z = 0$, the system is consistent and has infinitely many solutions.

Both the Matrix Inverse Method and Cramer's Rule rely on the determinant of the coefficient matrix being non-zero for a unique solution. The Matrix Inverse Method is often more versatile as it provides the inverse matrix, which can be useful in other contexts. Cramer's Rule offers a more direct formula for finding the values of the variables using determinants alone.
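Cramer's Rule also admits a direct implementation. The Python sketch below (assuming `numpy` for the determinants) works for any number of variables when $D \neq 0$; the function name is our own.

```python
import numpy as np

def cramer(A, b):
    D = np.linalg.det(A)
    if np.isclose(D, 0):
        raise ValueError("D = 0: Cramer's rule does not give a unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace the i-th column with the constants
        x[i] = np.linalg.det(Ai) / D  # x_i = D_i / D
    return x

A = np.array([[3.0, 2.0], [-1.0, 1.0]])
b = np.array([4.0, 2.0])
print(cramer(A, b))                   # [0. 2.], matching Example 2 below
```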


Example 1. Solve the following system of linear equations using the matrix method:

$2x + y = 5$

$x - 3y = -4$

Answer:

Given the system of equations:

$2x + y = 5$

... (a)

$x - 3y = -4$

... (b)

We can write this system in the matrix form $AX=B$, where:

$A = \begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}$ (Coefficient matrix)

$X = \begin{pmatrix} x \\ y \end{pmatrix}$ (Variable matrix)

$B = \begin{pmatrix} 5 \\ -4 \end{pmatrix}$ (Constant matrix)

First, calculate the determinant of the coefficient matrix $A$:

$|A| = \begin{vmatrix} 2 & 1 \\ 1 & -3 \end{vmatrix} = (2)(-3) - (1)(1) = -6 - 1 = -7$

Since $|A| = -7 \neq 0$, the matrix $A$ is non-singular, and the system has a unique solution given by $X = A^{-1}B$.

Next, find the inverse of matrix $A$, $A^{-1}$. For a $2 \times 2$ matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$, the inverse is $\frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$.

In our case, $A = \begin{pmatrix} 2 & 1 \\ 1 & -3 \end{pmatrix}$, so $a=2, b=1, c=1, d=-3$. The determinant is $ad-bc = (2)(-3) - (1)(1) = -7$.

The adjoint of $A$ is $\text{adj} A = \begin{pmatrix} -3 & -1 \\ -1 & 2 \end{pmatrix}$.

$A^{-1} = \frac{1}{|A|} (\text{adj} A) = \frac{1}{-7} \begin{pmatrix} -3 & -1 \\ -1 & 2 \end{pmatrix}$

Now, calculate the solution matrix $X$ using $X = A^{-1}B$:

$X = \frac{1}{-7} \begin{pmatrix} -3 & -1 \\ -1 & 2 \end{pmatrix} \begin{pmatrix} 5 \\ -4 \end{pmatrix}$

Multiply the matrices:

$X = -\frac{1}{7} \begin{pmatrix} (-3)(5) + (-1)(-4) \\ (-1)(5) + (2)(-4) \end{pmatrix}$

$X = -\frac{1}{7} \begin{pmatrix} -15 + 4 \\ -5 - 8 \end{pmatrix}$

$X = -\frac{1}{7} \begin{pmatrix} -11 \\ -13 \end{pmatrix}$

Multiply the scalar $-\frac{1}{7}$ with each element of the matrix:

$X = \begin{pmatrix} (-\frac{1}{7}) \times (-11) \\ (-\frac{1}{7}) \times (-13) \end{pmatrix} = \begin{pmatrix} \frac{11}{7} \\ \frac{13}{7} \end{pmatrix}$

Since $X = \begin{pmatrix} x \\ y \end{pmatrix}$ and $X = \begin{pmatrix} \frac{11}{7} \\ \frac{13}{7} \end{pmatrix}$, by comparing the corresponding elements, we get:

$x = \frac{11}{7}$

$y = \frac{13}{7}$

Thus, the unique solution to the given system of equations is $x = \frac{11}{7}$ and $y = \frac{13}{7}$.


Example 2. Solve the following system of linear equations using Cramer's rule:

$3x + 2y = 4$

$-x + y = 2$

Answer:

Given the system of equations:

$3x + 2y = 4$

... (a)

$-x + y = 2$

... (b)

Comparing these equations with $a_1 x + b_1 y = c_1$ and $a_2 x + b_2 y = c_2$, we have:

$a_1 = 3, b_1 = 2, c_1 = 4$

$a_2 = -1, b_2 = 1, c_2 = 2$

Calculate the determinant $D$ of the coefficient matrix:

$D = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix} = \begin{vmatrix} 3 & 2 \\ -1 & 1 \end{vmatrix}$

$D = (3)(1) - (2)(-1) = 3 - (-2) = 3 + 2 = 5$

Since $D = 5 \neq 0$, a unique solution exists for the system. We can use Cramer's rule.

Calculate the determinant $D_x$ by replacing the first column of $D$ with the constants:

$D_x = \begin{vmatrix} c_1 & b_1 \\ c_2 & b_2 \end{vmatrix} = \begin{vmatrix} 4 & 2 \\ 2 & 1 \end{vmatrix}$

$D_x = (4)(1) - (2)(2) = 4 - 4 = 0$

Calculate the determinant $D_y$ by replacing the second column of $D$ with the constants:

$D_y = \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix} = \begin{vmatrix} 3 & 4 \\ -1 & 2 \end{vmatrix}$

$D_y = (3)(2) - (4)(-1) = 6 - (-4) = 6 + 4 = 10$

Using Cramer's rule formula for $x$ and $y$:

$x = \frac{D_x}{D} = \frac{0}{5} = 0$

... (using 1)

$y = \frac{D_y}{D} = \frac{10}{5} = 2$

... (using 1)

The unique solution to the given system of equations is $x=0$ and $y=2$.

We can verify the solution by substituting $x=0$ and $y=2$ into the original equations:

Equation (a): $3(0) + 2(2) = 0 + 4 = 4$ (Matches the right side)

Equation (b): $-(0) + (2) = 0 + 2 = 2$ (Matches the right side)

Since both equations are satisfied, the solution is correct.


Example 3. Solve the following system of linear equations:

$x + y + z = 1$

$2x + 3y + 2z = 2$

$ax + ay + 2az = 4$

Answer:

Given the system of equations:

$x + y + z = 1$

... (a)

$2x + 3y + 2z = 2$

... (b)

$ax + ay + 2az = 4$

... (c)

We write the system in matrix form $AX=B$:

$A = \begin{pmatrix} 1 & 1 & 1 \\ 2 & 3 & 2 \\ a & a & 2a \end{pmatrix}$, $X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, $B = \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}$

Calculate the determinant of the coefficient matrix $A$:

$|A| = \begin{vmatrix} 1 & 1 & 1 \\ 2 & 3 & 2 \\ a & a & 2a \end{vmatrix}$

Expanding along R1:

$|A| = 1 \begin{vmatrix} 3 & 2 \\ a & 2a \end{vmatrix} - 1 \begin{vmatrix} 2 & 2 \\ a & 2a \end{vmatrix} + 1 \begin{vmatrix} 2 & 3 \\ a & a \end{vmatrix}$

$|A| = 1((3)(2a) - (2)(a)) - 1((2)(2a) - (2)(a)) + 1((2)(a) - (3)(a))$

$|A| = (6a - 2a) - (4a - 2a) + (2a - 3a)$

$|A| = 4a - 2a - a = a$

Case 1: If $|A| \neq 0$, i.e., $a \neq 0$.

The system has a unique solution. We can use the Matrix Inverse Method $X = A^{-1}B$.

First, find the cofactor matrix of A:

$C_{11} = \begin{vmatrix} 3 & 2 \\ a & 2a \end{vmatrix} = 6a - 2a = 4a$

$C_{12} = - \begin{vmatrix} 2 & 2 \\ a & 2a \end{vmatrix} = -(4a - 2a) = -2a$

$C_{13} = \begin{vmatrix} 2 & 3 \\ a & a \end{vmatrix} = 2a - 3a = -a$

$C_{21} = - \begin{vmatrix} 1 & 1 \\ a & 2a \end{vmatrix} = -(2a - a) = -a$

$C_{22} = \begin{vmatrix} 1 & 1 \\ a & 2a \end{vmatrix} = 2a - a = a$

$C_{23} = - \begin{vmatrix} 1 & 1 \\ a & a \end{vmatrix} = -(a - a) = 0$

$C_{31} = \begin{vmatrix} 1 & 1 \\ 3 & 2 \end{vmatrix} = 2 - 3 = -1$

$C_{32} = - \begin{vmatrix} 1 & 1 \\ 2 & 2 \end{vmatrix} = -(2 - 2) = 0$

$C_{33} = \begin{vmatrix} 1 & 1 \\ 2 & 3 \end{vmatrix} = 3 - 2 = 1$

The cofactor matrix is $\begin{pmatrix} 4a & -2a & -a \\ -a & a & 0 \\ -1 & 0 & 1 \end{pmatrix}$.

The adjoint of A is the transpose of the cofactor matrix:

$\text{adj} A = \begin{pmatrix} 4a & -a & -1 \\ -2a & a & 0 \\ -a & 0 & 1 \end{pmatrix}$

The inverse of A is $A^{-1} = \frac{1}{|A|} (\text{adj} A)$:

$A^{-1} = \frac{1}{a} \begin{pmatrix} 4a & -a & -1 \\ -2a & a & 0 \\ -a & 0 & 1 \end{pmatrix}$ (for $a \neq 0$)

Now, calculate $X = A^{-1}B$:

$X = \frac{1}{a} \begin{pmatrix} 4a & -a & -1 \\ -2a & a & 0 \\ -a & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}$

$X = \frac{1}{a} \begin{pmatrix} (4a)(1) + (-a)(2) + (-1)(4) \\ (-2a)(1) + (a)(2) + (0)(4) \\ (-a)(1) + (0)(2) + (1)(4) \end{pmatrix}$

$X = \frac{1}{a} \begin{pmatrix} 4a - 2a - 4 \\ -2a + 2a + 0 \\ -a + 0 + 4 \end{pmatrix}$

$X = \frac{1}{a} \begin{pmatrix} 2a - 4 \\ 0 \\ 4 - a \end{pmatrix}$

$X = \begin{pmatrix} \frac{2a - 4}{a} \\ \frac{0}{a} \\ \frac{4 - a}{a} \end{pmatrix} = \begin{pmatrix} 2 - \frac{4}{a} \\ 0 \\ \frac{4}{a} - 1 \end{pmatrix}$

Since $X = \begin{pmatrix} x \\ y \\ z \end{pmatrix}$, the unique solution for $a \neq 0$ is:

$x = 2 - \frac{4}{a}$

$y = 0$

$z = \frac{4}{a} - 1$

Case 2: If $|A| = 0$, i.e., $a = 0$.

In this case, the Matrix Inverse Method cannot be used directly. We check the consistency by evaluating $(\text{adj} A)B$.

When $a=0$, the adjoint matrix calculated above becomes:

$\text{adj} A |_{a=0} = \begin{pmatrix} 4(0) & -(0) & -1 \\ -2(0) & (0) & 0 \\ -(0) & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$

Now, calculate $(\text{adj} A)B$:

$(\text{adj} A)B = \begin{pmatrix} 0 & 0 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 4 \end{pmatrix}$

$= \begin{pmatrix} (0)(1) + (0)(2) + (-1)(4) \\ (0)(1) + (0)(2) + (0)(4) \\ (0)(1) + (0)(2) + (1)(4) \end{pmatrix} = \begin{pmatrix} -4 \\ 0 \\ 4 \end{pmatrix}$

Since $|A|=0$ and $(\text{adj} A)B = \begin{pmatrix} -4 \\ 0 \\ 4 \end{pmatrix} \neq O$ (the zero matrix $\begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}$), the system is inconsistent when $a=0$. This means there is no solution when $a=0$.

Summary of results:

- If $a \neq 0$, the system has a unique solution given by $x = 2 - \frac{4}{a}$, $y = 0$, $z = \frac{4}{a} - 1$.

- If $a = 0$, the system is inconsistent and has no solution.